perm filename IJ[TLK,DBL] blob sn#175712 filedate 1975-08-31 generic text, type C, neo UTF8
.DEVICE XGP

.FONT 1 "BASL30"
.FONT 2 "BASB30"
.FONT 4  "BASI30"
.FONT 5  "BDR40"
.FONT 6  "NGR25"
.FONT 7  "NGR20"
.FONT 8  "GRFX35"
.FONT A "SUP"
.FONT B "SUB"
.TURN ON "↑α↓_π[]{"
.TURN ON "⊗" FOR "%"
.TURN ON "@" FOR "%"
.PAGE FRAME 54 HIGH 89 WIDE
.TITLE AREA HEADING LINES 1 TO 2
.AREA TEXT LINES 4 TO 53
.COUNT PAGE PRINTING "1"
.TABBREAK
.ODDLEFTBORDER←EVENLEFTBORDER←850
.AT "ffi" ⊂ IF THISFONT ≤ 4 THEN "≠"  ELSE "fαfαi" ⊃;
.AT "ffl" ⊂ IF THISFONT ≤ 4 THEN "α∞" ELSE "fαfαl" ⊃;
.AT "ff"  ⊂ IF THISFONT ≤ 4 THEN "≥"  ELSE "fαf" ⊃;
.AT "fi"  ⊂ IF THISFONT ≤ 4 THEN "α≡" ELSE "fαi" ⊃;
.AT "fl"  ⊂ IF THISFONT ≤ 4 THEN "∨"  ELSE "fαl" ⊃;
.SELECT 1
.MACRO B ⊂ BEGIN VERBATIM GROUP ⊃
.MACRO E ⊂ APART END ⊃
.MACRO D ⊂ ONCE PREFACE 100 MILLS ⊃
.MACRO BB ⊂ BEGIN NOFILL SELECT 6 INDENT 0 GROUP PREFACE 0 ⊃
.MACRO BBB ⊂ BEGIN  INDENT 0,3,0  PREFACE 0  SINGLE SPACE ⊃
.MACRO BGIV ⊂ BEGIN NOFILL SELECT 7 INDENT 0 PREFACE 0  TURN OFF "{}" TURN ON "↑↓" ⊃
.MACRO B0 ⊂ BEGIN  WIDEN 2,7 SELECT 8 NOFILL PREFACE 0 MILLS TURN OFF "↑↓α"  GROUP ⊃
.MACRO B7 ⊂ BEGIN  WIDEN 7,7 SELECT 8 NOFILL PREFACE 0 MILLS TURN OFF "↑↓α"  GROUP ⊃
.MACRO W(F) ⊂ SELECT F NOFILL SINGLE SPACE; PREFACE 0; WIDEN 7,7 ⊃
.MACRO FAD ⊂ FILL ADJUST DOUBLE SPACE PREFACE 2 ⊃
.MACRO FAS ⊂ FILL ADJUST SINGLE SPACE PREFACE 1 ⊃
.FAD

.AT "[[" ⊂
[⊗4Slide: 
.⊃

.AT "[V" ⊂
[⊗4Vu-graph:
.⊃

.AT "]]" ⊂
⊗*]
.⊃

.MYFOOT←1
.FOOTSEP←"________________________________________________________________________________________"
.COUNT FOOTNOTE INLINE PRINTING "⊗A1⊗*"
.AT "$$" ENTRY "*" ⊂ XGENLINES←XGENLINES-1; NEXT FOOTNOTE; !;
.SEND FOOT ⊂ TURN ON "[]{" SELECT 7; SPACING 0; PREFACE 0; INDENT 0,10
⊗A{MYFOOT}⊗* ENTRY
.MYFOOT←MYFOOT+1
.BREAK ⊃ ⊃

.MACRO NSECP(A)  ⊂   SSECNUM←0
.SECNUM←SECNUM+1 
.SECTION←"A"
.SKIP TO COLUMN 1
.TURN ON "{∞→"   
.SEND CONTENTS ⊂
@1{SECNUM}. A⊗* ∞.→ {PAGE}
.⊃
.TURN OFF "{∞→"   
.ONCE CENTER TURN ON "{}"
@5↓_{SECNUM}. A_↓⊗*  
.  ⊃


.MACRO ASEC(A)  ⊂  SSECNUM←0
.SECTION←"A"
.SKIP TO COLUMN 1
.TURN ON "{∞→"   
.SEND CONTENTS ⊂
@1 A⊗*
.⊃
.TURN OFF "{∞→"   
.ONCE CENTER TURN ON "{}"
@5↓_A_↓⊗*  
.  ⊃


.MACRO NSEC(A)  ⊂  
.SSECNUM←0
.SECNUM←SECNUM+1 
.TURN ON "{∞→"   
.SEND CONTENTS ⊂
@1{SECNUM}. A⊗* ∞.→ {PAGE}
.⊃
.TURN OFF "{∞→"   
.SECTION←"A"
.GROUP SKIP 3
.ONCE CENTER TURN ON "{}"
@5↓_{SECNUM}. A_↓⊗*  
.  ⊃


.MACRO SSEC(A)  ⊂  TURN ON "{∞→"   
.SSECNUM←SSECNUM+1
.SSSECNUM←0
.SEND CONTENTS ⊂
@7               A⊗* ∞.→ {PAGE}
.⊃
.TURN OFF "{∞→"   
.ONCE INDENT 6 TURN ON "{}"
@2↓_{SECNUM}.{SSECNUM}. A_↓⊗*  
. ⊃

.MACRO ASSEC(A)  ⊂  TURN ON "{∞→"   
.SEND CONTENTS ⊂
@7               A⊗*
.⊃
.TURN OFF "{∞→"   
.ONCE INDENT 6 TURN ON "{}"
@2↓_A_↓⊗*  
. ⊃

.MACRO SSSEC(A)  ⊂  TURN ON "{∞→"   
.SSSECNUM←SSSECNUM+1
.TURN OFF "{∞→"   
.ONCE INDENT 1 TURN ON "{}"
@2↓_{SECNUM}.{SSECNUM}.{SSSECNUM}. A_↓⊗*  
. ⊃

.SECNUM←0
.SELECT 1
.INSERT CONTENTS
.PAGE←0
.PORTION THESIS
.NARROW 2,7
.TURN OFF "{∞→}"   
.PAGE←0
.NEXT PAGE
.INDENT 0
.FAD
.TURN OFF "{∞→}"   
.EVERY   HEADING(⊗7Beings:   Text   of  4IJCAI-75  Talk⊗*,⊗6D.   Lenat⊗*,⊗7{DATE}⊗*    ⊗4page {PAGE}⊗*)
.NSEC(Experts and Beings)

[[title]]
One  central problem  in AI  seems  to be:  how  to manage  knowledge
⊗4efficiently.⊗*  We have to date observed only one reasonably
efficient system, [[the human brain]], and the manufacturer seems
reluctant to divulge the details of its internal program. Instead of
studying the  gross behavior of one of these, maybe we should look at
a bunch of them interacting, like during a brainstorming session.  So
let's   talk  about   a  meeting   where   several  interdisciplinary
specialists [[Outline]] work together to solve a hard problem.   I'll
call  your  attention  to  a  few  characteristic  features  of  this
scenario.   By  using these  features as  guidelines, as  constraints,
I've programmed a simulation of that meeting. 

Picture now in your mind a group of cooperating human
experts, discussing a problem at a conference table.  Externally, all
they do is [[3 calls]] question and answer each other.  The flow of
control is not  strictly hierarchical, but  rather passes to  whoever
wants it most at the time.  [[3 returns]]. 

What would it  mean to ⊗4simulate⊗* such a  meeting?  Imagine several
little programs, each one modelling  a different expert. What  should
each  program,   called  a  ⊗4Being⊗*,   be  capable  of?     [[Being
capabilities]]  So far,  our list  of  capabilities will  include the
following: Each Being must have specific facts and strategies for its
designated area of  expertise.  It must interact  via questioning and
answering  other Beings. Each Being should  be able to recognize when
it is relevant.  So a Being is both a knowledge module (like a
Frame) and a control module (like an Actor). 

Let's return to  our meeting of human experts and  try to gather some
more characteristics.  To be more concrete, [[meeting]] suppose their
task is to design  and code a large computer program:  say, a concept
formation  system.   Experts who  will  be useful  include scientific
programmers, non-programming  psychologists, AI  researchers,  system
hackers,  and management  personnel.   At any  given moment,  when  an
expert  participates in the  discussion, he will  either be answering
some colleague or else transferring a tiny bit of knowledge about his
field into a programmed  function which fits somewhere into the final
concept formation program.  In that sense, the final code the experts
write reflects fragments of their knowledge. 

How could  Beings do this?   Each Being  models one of  the experts. 
[[names  of Beings]]  There would  be  one little  program containing
information about psychology (much more than would be used in writing
any single concept formation program), another Being who knows how to
manage  a  group  to  write  large  programs,  and  many  lower-level
specialists. 

The original community of Beings might be able to carry out a concept
formation  task immediately,  as  could the  human  experts, but  the
Beings should contain far too much information, far too inefficiently
represented, to be able  to say "we ourselves constitute  the desired
program!" They would  have to discuss the concept formation task, and
write short, fast, specialized versions of themselves, programs which
do just enough to carry out the task. 

.GROUP
.NSEC(The Program the Experts Wrote)

An experimental system of  100 Beings, called PUP6, was  designed and
implemented, and  managed to synthesize a concept formation program. 
[⊗4On board:⊗*] Don't get confused here. I wrote these  Beings, PUP6.
PUP6 interacts with a user to  write a concept formation program, CF.
CF then  interacts with a teacher to learn concepts. This activity is
concept attainment,  this is  automatic programming, and  this is  AI
research. 

.SKIP 15 

.COMMENT THIS GIVES ROOM TO DRAW IN THE LITTLE DIAGRAM;

.APART

In a couple  minutes we'll see how a few toy  Beings [⊗4wave sheet of
10 B.⊗*] interact to  synthesize part of CF,  so first we'll look  in
detail at that concept formation program.   Its external behavior can
be  specified as follows:  descriptions of  structures [[arch]] built
out of simple geometrical shapes  will be read in one after  another.
For each such scene, CF must guess its name.  The correct name of the
structure  will then be typed in.   CF must quickly learn to identify
simple structures (ARCH, TOWER), and must never make the same mistake
twice in a row.  

This is the only specification that the experts -- or, later, PUP6 --
will ever receive. The problem of what this means, and how to design algorithms
and data structures for CF, still remains.

CF  is actually  quite  unsophisticated, even  judged  by the  meager
knowledge present in the Psychologist  Being. For example, it  cannot
learn disjunctive concepts.  
[[monuments]] 
CF would have a very confused model of a
⊗4Monument⊗* if you made it include 
a tower, a pyramid,
and a square plate. 

One of the  first things  I did in  this research 
[[experts  called for]] 
was  to simulate  a
group  of people  working to  write CF.  A 300-page  long script  was
completed  by hand.   I will refer  to this as  the ⊗4protocol⊗*.   In
all, 87 different experts were called for.  At
the  end, a  concept  formation program  had  actually been  written.
Let's go a little deeper and  see the algorithm that this CF  program
uses. 

One of the experts at the simulated  meeting must have read Winston's
dissertation, because  the synthesized concept formation program, CF,
was remarkably similar to the one Winston describes. 

CF repeatedly scans a scene and  tries to name it. The scene is  read
in as a set of objects and  a set of features [[scene]]. A feature is
a  relation  on  those  objects.    CF  maintains  a  model for  each
differently-named scene  it has  ever read in.   A  model contains  a
description of the objects one  expects in such a structure, a set of
features which ⊗4must⊗* be present in  any scene having this name,  a
set of features which ⊗4must not⊗* be present if the scene is to have
this name, and a set of features which ⊗4may⊗* be present or absent. 
Thus a model is an archetypical scene plus a name.  For example, part
of a scene might be described as: [⊗4gesture to scene⊗*], and a typical
model might be like this: [⊗4gesture to ARCH model⊗*]. 


Each time it gets a new scene, CF scans its models until it finds one
which matches the scene. A model is said to match a  scene if all the
MUST features  associated with that model are  observed in the scene,
and all the MUSTNOT  features are absent from  the scene. 
[[CF flowchart]]
CF  informs
the user of  this guess, and accepts  the proper name. If  it guessed
incorrectly, CF modifies its models.  A ⊗4concept⊗* here simply means
a model; i.e.,  all scenes having  a given  name.  Concept  formation
here is simply the process of classifying scenes by name. 
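The matching rule just described can be sketched as follows. This is a modern Python paraphrase, not the LISP that was actually synthesized; the model, feature tuples, and function names are all invented for illustration.

```python
# A model matches a scene iff every MUST feature appears in the scene
# and every MUSTNOT feature is absent; MAY features are ignored.

def matches(model, scene_features):
    return (all(f in scene_features for f in model["must"]) and
            all(f not in scene_features for f in model["mustnot"]))

def guess_name(models, scene_features):
    """Scan the models until one matches the scene, as CF does."""
    for name, model in models.items():
        if matches(model, scene_features):
            return name
    return None   # no known concept matches

arch = {"must":    {("supports", "A", "B"), ("supports", "C", "B")},
        "mustnot": {("touches", "A", "C")},
        "may":     {("wedge", "B")}}

scene = {("supports", "A", "B"), ("supports", "C", "B"), ("wedge", "B")}
print(guess_name({"ARCH": arch}, scene))  # → ARCH
```

If the guess is wrong, CF's model-modification step (not shown) adjusts which features sit in the MUST, MUSTNOT, and MAY sets.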

.NSEC(Anatomy of Synergetic Cooperation)

Now we come to a hard question: how did the experts, and how should
our Beings, go from the brief external specification of CF to a long,
detailed, running LISP program?  Consider the birth of even one small
idea necessary  in the  writing of  CF -- the  idea to  classify each
model's features into three categories (MUST, MUSTNOT, MAY).  No
single specialist  at the  meeting could  have  had such  an idea  by
himself.  How did their intellects mesh, effectively communicate, and
unite their powers?  Of course any answer to a question like  that is
either pretentious or partial. [[partial]] 

Why is the  group of experts a  productive entity?  There  seem to be
two  key abilities:  [[abilities]] (1) first,  for any  given problem
that crops  up, ⊗4some⊗*  experts  in the  group understand  it  well
enough  to   begin  work  on   it;  (2)  second,  the   members
intercommunicate  well enough that each expert  recognizes when he can
be of assistance, recognizes when he needs assistance,  and correctly
interprets the answers other experts provide.  [[but how?]]

I  claim that [[how?]]  the group  is able to  handle a  complex task
because the experts  are so diverse  in abilities; and  I claim  that
they can communicate to the necessary extent because they are all
similar in basic cognitive structure (in the anatomy of their minds).
This is the basic idea I have to bring to you.  [⊗4If VU-graph is on or
v. easy, then show: HOW: (???)⊗*]

The experts' minds  must be similar enough  that they can  understand
each  other's  questions,  and meaningfully  interpret  each  other's
answers, but their minds must be different enough to give the group
as a whole abilities which no single member possesses. 


The hypothesis is [[equiv]] that each expert can be  treated as if he
consisted just of a categorized mass of information, where the set of
categories is  about  the same  for  each expert,  but  the  detailed
knowledge stored in  each category varies greatly from  individual to
individual. 

These categories of knowledge indicate the ⊗4types⊗* of questions any
expert  can  be  expected  to  answer.    An  expert   is  considered
⊗4equivalent⊗* to  his answers to  several standard questions.   Each
expert  has the same mental "parts", it  is only the values stored in
these parts, their contents, which distinguish him  as an individual.
Notice  that some  of these  parts [⊗4gesture  Respond to⊗*]  deal with
taking control, with  doing things,  while others hold purely static  factual
information. 

Armed with this  dubious view of  intelligence, let's return  to the
design of Beings.
[[Wilde]]
We're about ready to get specific.  

Each  Being  consists  of a  mass  of  information, partitioned  into
several categories or ⊗4parts⊗*.   [[internal structure]] Each  Being
shall have  many parts, each possessing  a name (a question  it deals
with) and a value (a procedure capable of answering that question). 

By analogy  with the human experts, each Being will have the same set
of parts [[PSYCH Being]] (will  answer the same kinds of  queries), and
this uniformity should  permit painless intercommunication. Since the
paradigm of the meeting  is questioning and  answering, the names  of
the parts should cover all the types of questions one expert wants to
ask  another.
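The uniform-parts idea can be sketched in a few lines. This is a minimal modern Python illustration, not PUP6's LISP; the part names and the sample Being below are invented stand-ins for PUP6's actual 29 parts.

```python
# Every Being answers the same fixed set of questions (its "parts");
# only the stored answers differ from Being to Being.

PART_NAMES = {"IDEN", "WHY", "HOW", "AFFECTS"}   # invented stand-ins

class Being:
    def __init__(self, name, parts):
        # Uniform anatomy: a Being may leave parts empty,
        # but may not invent new kinds of parts.
        assert set(parts) <= PART_NAMES
        self.name = name
        self.parts = parts            # part name -> procedure answering it

    def ask(self, part_name, *args):
        proc = self.parts.get(part_name)
        return proc(*args) if proc else None   # no part means no answer

psychologist = Being("PSYCHOLOGIST", {
    "IDEN": lambda phrase: "concept" in phrase,   # when am I relevant?
    "WHY":  lambda: "to supply facts about human concept formation",
})
print(psychologist.ask("WHY"))
```

Because the set of questions is fixed, any Being can interrogate any other without knowing anything about its internals.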

Once again:  the concept of  a group of  Beings is that  many modules
coexist, each having a complex structure, but that structure does not
vary from Being  to Being.  This  idea has analogues in  many fields:
transactional analysis in psychology, anatomy in medicine, modular
design in architecture.

To test  out this  idea,  I programmed  a group  of Beings,  PUP6,  a
modular system which interacts with a human user and generates the CF
program.     There  are   3  steps  I  followed   in  creating  PUP6:
[[procedure]]

.BEGIN INDENT 4,9,0

(1)  Gather  all communications  that  might  occur  between  experts
working together to write CF. 

(2) Distill this  into a core of simple questions,  Q.  The size of Q
is very important.  If ||Q|| is too large, it will be hard to add new
Beings.   If ||Q||  is too  small, all  the non-uniformity  is simply
pushed down into the values of one or two general catchall questions. 

(3)  For each expert needed,  write one Being. Each  time that expert
speaks in  the protocol,  analyze what  knowledge he  is using,  what
Being part that would come under,  and fill in that knowledge in that
part. 

.END

.NSEC(An Example: A Few Beings in Action)

[⊗4turn off slide projector, prepare to turn on vu-graph projector⊗*]

The PUP6 system is too big to look at in detail, but I've pulled out a few
parts of about 10 of the Beings, and together I think we'll get them to
think up
the idea of separating each model's features into Must/Mustnot/May categories.

Let me set the stage for you. 
PUP6 has already learned that it is supposed to write a concept
formation program, one that involves repeatedly reading in a scene and
trying to name it.  PUP6 has decided that CF will do this by comparing the
scene against each already-known concept model, until a match is found.
[Vstage]]
The user says that a model fails to match a scene if any one of the model's features
is incompatible with the scene's features.   Some Being 
translates this into LISP code involving all executable expressions, except for
"is incompatible with".

Here is where our little group of sample Beings begins to do things.
The sheet I handed out lists the knowledge contained in each part of the 10
Beings I'll be mentioning by name. 
As I talk through the discussion, you can try to follow along
on the sheet and figure out at each moment why this particular Being is
speaking, and how he is able to say what he is saying.

CONTRADICTION is the Being who recognizes the phrase
"is incompatible with". (You should now be looking at the
parts of the Being named CONTRADICTION, on your sheet.)
The two arguments to CONTRADICTION in this case are just
F and Scene-F.  CONTRADICTION broadcasts a plea to
everyone to examine F and make some
absolute statements
about it.
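The broadcast step itself is simple to sketch. Again this is an invented Python paraphrase, not PUP6's mechanism verbatim: the plea is offered to every Being, and each one's recognition part decides whether to respond.

```python
# Each Being carries a recognition predicate; a broadcast polls them
# all and collects the answers of those that fire.

def broadcast(plea, beings):
    """Return (name, answer) pairs from every relevant Being."""
    return [(b["NAME"], b["RESPOND"](plea))
            for b in beings
            if b["RECOGNIZE"](plea)]

membership = {
    "NAME": "MEMBERSHIP",
    "RECOGNIZE": lambda plea: "examine" in plea,
    "RESPOND":   lambda plea: "F might be a member of Scene-F",
}
negation = {
    "NAME": "NEGATION",
    "RECOGNIZE": lambda plea: "negation" in plea,
    "RESPOND":   lambda plea: "rewrite (NOT (NOT x)) as x",
}
print(broadcast("please examine F", [membership, negation]))
# → [('MEMBERSHIP', 'F might be a member of Scene-F')]
```

Polling every Being on every plea is exactly where the inefficiency discussed later comes from.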

MEMBERSHIP says that F might be a ⊗4member⊗* of Scene-F.

PROBABILITY says that F's
probability of being a member of Scene-F is either 1, 0, or in between.
While not overly profound, this is enough to let
ALTERNATIVES recognize three separate cases. ALTERNATIVES now can
encode this matching test as a conditional: [V COND 1]]

The COND has a separate branch for each probability-value of F, and 
ALTERNATIVES has naively added a fourth default branch so no  F can ever
fall off the end of the COND.

Eight small pieces of code have to be written yet: [Vpoint to tests and responses]].
CONTRADICTION says that this first response [V transform]] is the contradiction of
F being present in Scene-F. 
That is, F should be one of the scene features, but it isn't.
MEMBERSHIP and CONTRADICTION both recognize parts of this phrase, and they rewrite it
as this [Vgesture to:
(negation (MEMBER F Scene-F))]]. NEGATION finishes the job.
[VCOND 2]]

Similarly, the second response becomes 
(NOT (NOT (MEMBER F Scene-F))),
which NEGATION rewrites to (MEMBER F Scene-F).
[VCOND 3]]

For the third response, CONTRADICTION wants someone to decide whether features and
their negations might ever occur in the same scene. MESSENGER claims he can decide
this; actually he  asks the user about it. Assuming the user says this
could never happen, CONTRADICTION decides that no contradiction could
ever take place, hence replaces this third response by NIL. [VCOND 4]]

Suppose now the system decides to encode these dividing tests. BRANCHING-TEST
recognizes his relevance and takes over. He says it would be very easy for
him to encode 
them as simple
calls  on the LISP function "MEMBER".  
MEMBERSHIP responds to this, but can't say anything productive.
STRUCTURE-INDUCER also feels relevant, and he is really the one we want.
Since Model-Features are as yet unstructured, STRUCTURE-INDUCER 
can simply assert that CF will keep each model's set of features
partitioned into four separate sets.
As you see by looking at his parts, STRUCTURE-INDUCER has several other
chores, such as
asking the user for 4 new names. But he defers doing these a little while.

[VCOND 5]]
These branching tests are now rewritten [Vas (MEMBER  F  PROB-1-PART-OF-MODEL-F),
(MEMBER  F  PROB-0-PART-OF-MODEL-F),
(MEMBER  F  PROB-⊗6>0&<1⊗*-PART-OF-MODEL-F), and "otherwise".]]

⊗1The awkward names were generated by PLAUSIBLE-NAMER.

LONG-NAME-AVOIDER [VCOND 6]] now 
shortens these identifiers [Vto P1-MODEL-F, P0-MODEL-F,
and P01-MODEL-F.]]

BRANCHING-TEST comments that
if all items fall into one of the first three categories, then the fourth
case of the COND (and the fourth subdivision of Model-F) can be forgotten about.

PROBABILITY says that this is in fact true, so Model-F will be maintained as
3 separate sets, not four, and BRANCHING-TEST wipes away this line [Vgesture]] and 
replaces
this test by T.  [VCOND 7]] 

STRUCTURE-INDUCER ⊗4now⊗* asks the user for the names of the
⊗4three⊗* parts of Model-F; that is, for each model, the features of that model
will be stored internally in one of three separate lists. 
Notice that because he waited, STRUCTURE-INDUCER now only has to ask for 3 new
names, not 4.
The user says to
call these MUST, MUSTNOT, and MAY, and STRUCTURE-INDUCER 
finalizes the structuring. [VCOND 8]]
He searches out all the old references to
Model-F, and replaces each by (APPEND MUST MUSTNOT MAY).
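The test the Beings finally arrived at can be paraphrased as follows. The real output was LISP and the names here are invented; this just restates how "F is incompatible with the scene" became a three-way membership test over the model's MUST / MUSTNOT / MAY lists.

```python
def incompatible(f, scene_features, must, mustnot):
    """F contradicts the scene if a MUST feature is missing from it,
    or a MUSTNOT feature is present in it."""
    if f in must:                 # probability 1: F must appear in the scene
        return f not in scene_features
    if f in mustnot:              # probability 0: F must be absent
        return f in scene_features
    return False                  # MAY: the branch whose test PUP6 reduced to T;
                                  # no contradiction is possible here

arch_must    = [("supports", "A", "B")]
arch_mustnot = [("touches", "A", "C")]
print(incompatible(("supports", "A", "B"), [], arch_must, arch_mustnot))  # → True
```

Note there is no fourth branch: since every feature of a model falls into one of the three lists, the default case that ALTERNATIVES first added could be argued away, just as in the dialogue above.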

All the user sees are a few lines of dialogue, like this: [V dialogue]].

I realize that from the sketchy sheet you have, it's not clear ⊗4precisely⊗*
how the right Being spoke up at just the right moment, and how he did
⊗4all⊗* that he did, but 
I hope you've glimpsed how this might
actually be worked out, in detail, if you have 87 Beings with
29 large parts, instead of 10 Beings with a few small parts just sketched in.

.NSEC(Internal Details of Beings)

[⊗4turn off vu-graph projector, prepare to turn on slide projector⊗*]

And in the real PUP6 system there ⊗4were⊗* 87 BEINGs [[PUP6 Beings]],
and each had most of these 29 kinds of parts [[parts]].  (I hope you're
not missing anything off these slides.)

[[3 categories]] The set of parts breaks into three rough categories:
(1) parts which help this Being  get control at the proper times, (2)
parts containing  knowledge the Being uses when it gains control, and
(3) parts used  mainly to  answer the user's  questions and keep  him
oriented. 


While PUP6  is running, the user  can interrupt at any  moment. If he
asks a common  question, like WHY,  HOW, What will  this affect,..  ,
then all the system has  to do is retrieve the part  with that label,
from  the Being  currently in  control, or  one of  his predecessors.
This gives a good illusion that the system actually  understands what
it is doing. 
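That retrieval step can be sketched directly. This is an invented Python illustration of the lookup just described: a user question such as WHY is answered from the part with that label, searching back from the Being currently in control through its predecessors.

```python
# control_stack holds the Beings that have held control, most recent last.

def answer(question, control_stack):
    for being in reversed(control_stack):
        if question in being["PARTS"]:
            return being["PARTS"][question]
    return None   # no Being in the control chain has that part filled in

control_stack = [
    {"NAME": "PGM-MANAGER",
     "PARTS": {"WHY": "the user asked for a concept formation program"}},
    {"NAME": "CODER",
     "PARTS": {"HOW": "by encoding the test as a COND"}},
]
print(answer("WHY", control_stack))
```

The "illusion of understanding" costs almost nothing: the answers were written into the parts when the Beings were, and the interrupt merely retrieves them.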

For aesthetic uniformity, and also  because of a fortuitous bug, PUP6
wrote   CF   as   a   pool   of   Beings   itself.      Although   CF's
question-answering ability is  inferior to PUP6's,  the fact that  CF
had ⊗4any⊗* such power was a shock to me.  That is, as the teacher is
working with the synthesized  CF program, not only  does it learn  to
classify structural scenes, but it answers  questions about what it's
doing; you can interrupt CF as it is running and ask for parts of
Beings.   As  an  example,  suppose  the  synthesized  CF  program  is
running; [[CF  excerpt]] it has just  typed out "The scene  is not an
Arch".  The  user  can interrupt,  and  ask WHY,  and  CF  will reply
"Because the relation (Supports A B) is a MUST relation for Arch, but
is absent in  the scene". 

.NSEC(Theory of Pure Beings Systems)

By now we have  some new constraints that Beings  must satisfy. Let's
throw in a few personal biases as well.  [[new constraints]]

It  would be aesthetically pleasing  to restrict all  entities in the
system to be  Beings (including all parts  of all Beings).   However,
this would cause  an infinite regress.  To stop  this, one asserts
that at some finite level, all constructs are primitive.  ACTORs, for
example, set this  level to zero;  Beings set it  to one.   Predicate
calculus lets this number be  unbounded: a part of a wff at any depth
can still  be a  wff.   ACTORs  themselves  are primitive,  but  only
⊗4parts⊗* of Beings are primitive.


One bias of mine, reflected in PUP6, is the rejection of debugging as
a necessary programming tool.  Ignoring details is sometimes a sign
of planning,  but often  just plain  carelessness.   Humans depend on
their  adaptability  to  compensate for  limitations  in  their brain
hardware, but  there  is no  need  for an  ⊗4automatic⊗*  programming
system to  do so.  Any  tireless system need  not ignore details, but
should settle  them  or carefully  defer  them. Deferral  is  a  much
maligned  way of  solving problems.    Procrastination is  often  good
policy if new information is pouring in continually anyway.

These biases are not inherent in the Beings formulation, but only in
the design of the PUP6 system (and in my mind). 

.SSEC(Comparison to Other Schemes)

To clarify what Beings are  and are not, I will now go out  on a limb
and  contrast them with my  interpretation of some  similar AI ideas.
[[contrast]] FRAMES are sufficiently amorphous to subsume Beings.
FRAMES are typically knowledge modules, and not so much control
modules as Beings are. 

PUP6 goes out of its way to avoid the
kind of debugging that is the backbone of HACKER. 

.SELECT 1

ACTORs, unlike Beings, have  no fixed structure  imposed, and do  not
broadcast their messages,  and ACTORS change their  formal definition
every six months  instead of every year. (((Also, Carl Hewitt says that
ACTORS are related to sterility.))) ACTORS are control modules,  not so
much knowledge modules as Beings are. 

Beings subsume many popular AI features; if you care, read my paper. 
Inefficiencies come in because each Being broadcast is very costly. 

.SSEC(Structure vs Uniformity)

[⊗4If time is short, skip this subsection completely.⊗*]

The number of parts each Being has  seems to indicate the
balance between uniformity and  structure in the community.   The set
of Being parts  must be small and universal, to  preserve some of the
advantages of uniformity (easy addition  of knowledge to the  system,
easy inter-Being  communication).   This demands  that the number  of
parts of each  Being be, say, under 100.  [⊗4On board, write: ⊗6||Q||
< 100⊗1]  

But it  is the  complex structure  of a  Being which  makes
complex behaviors feasible, including flexible communication as well
as viable  final products.  So each Being should have many parts, say
at least ten.   [⊗4On board, add: ⊗610  < ||Q|| < 100⊗1]

This range,
between 10 and 100, is  fine for the domain of automatic programming.
In other  domains, it  may be  inappropriately small or large;  this
would indicate  that  Beings could  ⊗4not⊗* be  used effectively  for
those tasks. 

.NSEC(Experimental Results)

[[Actual opening dialog]]
By lumping all the  Beings of the PUP6 system  together conceptually,
the interaction is seen as a ⊗4dialogue⊗* between a human user and an
automatic programming system.  

Because of its genesis from a single "experts meeting"  protocol, 
[[PUP6 Performance]]
the
PUP6 pool  of Beings was (i)  easily able to reproduce  that "proper"
protocol dialogue, but (ii) incapable of widely varied dialogues with
the user.  This is acceptable, since PUP6 is not, after all, a natural
language system.

PUP6 did  eventually synthesize CF and 2 other programs.

.NSEC(Conclusions)


Although far from spectacular, PUP6 ⊗4has⊗*  [[conclusions]]
demonstrated the feasibility of Beings for large tasks.

Two advantages were hoped for by  using a uniform set of Being parts.
Addition of  new Beings  to the  pool  was easy  for me  and for  the
Beings, but hard for  the untrained user.  Communication among Beings
was surprisingly fluid.    Tens  of  thousands  of
messages had to pass among them to produce CF. 

Two  advantages  were   hoped  for  by  keeping  the   Beings  highly
structured.
The interactions with the user were terribly brittle, but
the complex tasks put to the pool ⊗4were⊗* successfully completed. 

Remember: Beings distribute the responsibility  for
writing  code and  for recognizing  relevance, for knowledge  and for
control,  to  a large community of experts,  each  having  a  similar
anatomy. 

What ⊗4are⊗*  Beings good for?   The idea of a fixed  set of parts is
useful if the  mass of knowledge  is too huge  for one individual  to
keep "on top" of.  It then should be  organized in a very uniform way
(to  simplify preparing it for  storage), yet it must  also be highly
structured (to speed  up retrieval).   Beings are  big and slow,  but
well-suited to organizing knowledge in ways meaningful to how it can be
used. 
.COMMENT table of contents;

.EVERY HEADING(,,)
.PORTION CONTENTS
.NOFILL
.NARROW 6,8
.BEGIN CENTER
.GROUP SKIP 3
.SELECT 5

BEINGS

.SELECT 2

KNOWLEDGE AS INTERACTING EXPERTS



.TURN ON "{∞→"







Douglas B. Lenat




Artificial Intelligence Laboratory

Stanford University











⊗4Text of IJCAI4 Talk:⊗*

⊗6Prepared  {DATE}⊗*

.SKIP TO COLUMN 1
⊗5Table of Contents⊗*


.END

.PREFACE 10 MILLS
.TURN ON "{∞→"
.NARROW 7,7
.RECEIVE

.PAGE←0
.SELECT 1